Adaptation in Weight Space through Gradient Descent for Hopfield Net as Static Optimizer: Is It Feasible?

Author

  • GURSEL SERPEN
Abstract

This paper reports the results of an empirical simulation study on adapting the weights of a Hopfield neural network, configured as a static optimizer, through gradient descent, tested on the traveling salesman problem. Adaptation through gradient descent, within the context of recurrent and non-recurrent back-propagation training, was attempted in the weight space, which is high-dimensional: on the order of 1,000,000,000,000 weights for a two-dimensional node array of the Hopfield network configured for a 1000-city problem instance. Following substantial empirical work, practically no noteworthy progress could be recorded toward realizing the adaptation in the weight space: the adaptation algorithm failed to locate a set of weight values that established the solutions of the traveling salesman problem instances as local minima in the Lyapunov space. Accordingly, the findings in this paper suggest that alternate adaptation schemes with a small number of freely adjustable parameters should be considered.

INTRODUCTION

The Hopfield neural network offers a true "real-time" optimization algorithm for computing a local optimum solution of static optimization problems [Smith, 1999]. The promise is a quick, local optimum solution that also scales with the size of the problem if and when the Hopfield network algorithm is implemented in hardware so as to take full advantage of its massive degree of parallelism. Determining the weights of the Hopfield network for a given static optimization problem has been the main obstacle toward this end, since techniques that pre-compute weight values are problematic at best [Abe, 1989; Abe, 1993; Abe, 1996; Serpen, 2000] and no weight learning procedure has been successfully demonstrated either [Sima, 2003; Atencia et al., 2005; Bournez et al., 2006]. A theoretical framework for an adaptive Hopfield network that attains its weight vector through training for a given static optimization problem was proposed in [Serpen, 2003]. One peculiar aspect of the Hopfield network is that the weight space is extremely high-dimensional for a typical two-dimensional topology configured for static optimization. For instance, the weight space has O(N^4) dimensions for an N-vertex graph search problem as a generic example, since the N x N node array induces N^2 neurons and hence N^4 pairwise weights. Learning or training to compute a solution weight vector, i.e. through an algorithm like gradient descent, in such a high-dimensional space poses great challenges. Accordingly, empirical simulation studies are needed to assess whether it is in fact feasible to perform the intended training in such high-dimensional spaces. This paper presents, through a simulation study, an assessment of the feasibility of an adaptation scheme in the weight space of Hopfield neural networks configured for static optimization. The theoretical and mathematical framework for this study is presented in
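As a quick sanity check on the dimensionality figures quoted above, the short Python sketch below reproduces the weight count of a fully connected two-dimensional Hopfield node array; the function name and the full-connectivity assumption are illustrative, not taken from the paper.

```python
# Illustrative arithmetic only: weight-space size of a fully connected
# two-dimensional Hopfield node array configured for an N-city TSP instance.

def hopfield_tsp_weight_count(n_cities: int) -> int:
    """Weights in a fully connected n_cities x n_cities node array."""
    n_neurons = n_cities * n_cities  # one row per city, one column per tour position
    return n_neurons * n_neurons     # one weight per ordered neuron pair: O(N^4)

for n in (10, 100, 1000):
    print(f"{n:>4} cities -> {hopfield_tsp_weight_count(n):.0e} weights")
# 1000 cities -> 1e+12, the trillion-weight figure quoted in the abstract
```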

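For concreteness, here is a minimal, hedged sketch of the kind of weight-space adaptation loop the study describes: relax the network to a fixed point, compare the fixed point against a target tour, and nudge the full weight matrix. The dynamics follow the standard continuous Hopfield model; the surrogate delta-rule update is a deliberate simplification of recurrent back-propagation, and all names and constants are illustrative.

```python
import numpy as np

def relax(W, b, v, steps=200, tau=1.0, dt=0.01):
    """Iterate the continuous Hopfield dynamics toward a fixed point."""
    u = np.arctanh(np.clip(2 * v - 1, -0.999, 0.999))  # invert the sigmoid below
    for _ in range(steps):
        u += dt * (-u / tau + W @ v + b)
        v = 0.5 * (1 + np.tanh(u))   # sigmoid activation, states in [0, 1]
    return v

rng = np.random.default_rng(0)
n = 25                               # 5-city instance: a 5 x 5 array, flattened
W = rng.normal(scale=0.01, size=(n, n))
W = 0.5 * (W + W.T)                  # symmetric weights, per Hopfield stability
b = np.zeros(n)
target = np.eye(5).flatten()         # stand-in valid tour as a 0/1 permutation matrix

lr = 0.1
for epoch in range(100):
    v = relax(W, b, rng.uniform(0.45, 0.55, n))
    err = v - target
    # Surrogate gradient step: a delta rule on the fixed point, standing in for
    # the full recurrent back-propagation gradient through the dynamics.
    W -= lr * np.outer(err, v)
    W = 0.5 * (W + W.T)              # keep the weight matrix symmetric
# The paper reports that searches of this kind failed to make noteworthy progress.
```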

Similar Articles

Adaptive Hopfield Network

This paper proposes an innovative enhancement of the classical Hopfield network algorithm (and potentially its stochastic derivatives) with an “adaptation mechanism” to guide the neural search process towards high-quality solutions for large-scale static optimization problems. Specifically, a novel methodology that employs gradient-descent in the error space to adapt weights and constraint weig...

Full Text
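The entry above adapts a handful of constraint weighting coefficients rather than the full weight matrix; the hedged Python sketch below illustrates that flavor of adaptation. The penalty terms and the update rule are illustrative placeholders (a simple heuristic that boosts the most-violated constraint), not the article's actual gradient formulation.

```python
import numpy as np

def penalty_terms(v, n_cities):
    """Two example TSP penalties (hypothetical set): row and column validity."""
    V = v.reshape(n_cities, n_cities)
    row = np.sum((V.sum(axis=1) - 1.0) ** 2)  # each city appears in one position
    col = np.sum((V.sum(axis=0) - 1.0) ** 2)  # each position holds one city
    return np.array([row, col])

n_cities = 5
c = np.ones(2)        # only two freely adjustable parameters, not O(N^4) weights
lr = 0.05
rng = np.random.default_rng(1)

for step in range(200):
    v = rng.uniform(0, 1, n_cities * n_cities)  # stand-in for a relaxed network state
    terms = penalty_terms(v, n_cities)
    # Raise the coefficient of whichever constraint is violated most, so the
    # next relaxation penalizes that violation harder.
    c += lr * terms / (terms.sum() + 1e-12)
```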

A Heuristic and Its Mathematical Analogue within Artificial Neural Network Adaptation Context

This paper presents an observation on adaptation of Hopfield neural network dynamics configured as a relaxation-based search algorithm for static optimization. More specifically, two adaptation rules, one heuristically formulated and the second being gradient descent based, for updating constraint weighting coefficients of Hopfield neural network dynamics are discussed. Application of two adapt...

Full Text

Generating Network Trajectories Using Gradient Descent in State Space

A local and simple learning algorithm is introduced that gradually minimizes an error function for neural states of a general network. Unlike standard backpropagation algorithms, it is based on linearizing the neurodynamics, which are interpreted as constraints for the different network variables. From the resulting equations, the weight update is deduced which has a minimal norm and produces s...

Full Text
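As a hedged illustration of the minimal-norm idea sketched in the entry above (and only that; the article's own derivation linearizes the full neurodynamics), the following picks the smallest-Frobenius-norm weight change that maps the current state onto desired pre-activations. All quantities are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
W = rng.normal(scale=0.1, size=(n, n))
v = np.tanh(rng.normal(size=n))   # current network state
u_target = rng.normal(size=n)     # desired pre-activations at the next step

residual = u_target - W @ v
# The minimal-Frobenius-norm solution of dW @ v = residual is this rank-1 matrix.
dW = np.outer(residual, v) / (v @ v)
W += dW
assert np.allclose(W @ v, u_target)  # the linear constraint now holds exactly
```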

A Framework for Adapting Population-Based and Heuristic Algorithms for Dynamic Optimization Problems

In this paper, a general framework is presented for extending swarm-intelligence-based heuristic optimization algorithms from static to dynamic environments. In dynamic optimization problems, as opposed to static ones, the evaluation function or the constraints change over time, and hence so does the location of the optimum. The subject matter of the framework is based on the variability of the ...

Full Text

Reference-shaping adaptive control by using gradient descent optimizers

This study presents a model reference adaptive control scheme based on a reference-shaping approach. The proposed adaptive control structure includes two optimizer processes that perform gradient descent optimization. The first process is the control optimizer, which generates an appropriate control signal so that the controlled system output tracks a reference model output. The second process is ...

Full Text
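To make the two-optimizer idea above concrete in miniature, here is a hedged sketch of gradient-descent gain adaptation in the classic MIT-rule style of model reference adaptive control; the plant, gains, and rates are invented for illustration and are not taken from the article.

```python
# Plant: y = k_p * (theta * u_c); reference model: y_m = k_m * u_c.
k_p, k_m = 2.0, 1.0        # unknown plant gain and reference-model gain (assumed)
theta, gamma = 0.0, 0.01   # adjustable feedforward gain and adaptation rate

for t in range(500):
    u_c = 1.0                    # command signal, constant for simplicity
    u = theta * u_c              # control signal produced by the adjustable gain
    y, y_m = k_p * u, k_m * u_c  # plant output and reference-model output
    e = y - y_m                  # tracking error to be driven to zero
    # MIT rule: theta -= gamma * e * (de/dtheta); de/dtheta = k_p * u_c, which is
    # unknown in practice, so y_m serves as the usual proportional surrogate.
    theta -= gamma * e * y_m

print(theta)  # approaches k_m / k_p = 0.5, so y tracks y_m
```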


Journal title:

Volume   Issue

Pages  -

Publication date: 2007